In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over

: <math>\min_x \|Ax - y\|^2 + \lambda \|x\|^2</math>

to find a vector <math>x</math> that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as

: <math>\min_X \|AX - Y\|^2 + \lambda \|X\|^2,</math>

where the vector norm enforcing a regularization penalty on <math>x</math> has been extended to a matrix norm on <math>X</math>.

Matrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning.

== Basic definition ==
Consider a matrix <math>W</math> to be learned from a set of examples <math>S = (X_i^t, y_i^t)</math>, where <math>i</math> goes from <math>1</math> to <math>n</math> and <math>t</math> goes from <math>1</math> to <math>T</math>. Let each input matrix <math>X_i^t</math> be in <math>\mathbb{R}^{D \times T}</math>, and let <math>W</math> be of size <math>D \times T</math>. A general model for the output <math>y</math> can be posed as

: <math>y_i^t = \langle W, X_i^t \rangle,</math>

where the inner product is the Frobenius inner product. For different applications the matrices <math>X_i^t</math> will have different forms,〔Lorenzo Rosasco, Tomaso Poggio, "A Regularization Tour of Machine Learning — MIT-9.520 Lecture Notes," manuscript, Dec. 2014.〕 but for each of these the optimization problem to infer <math>W</math> can be written as

: <math>\min_{W \in \mathcal{H}} E(W) + R(W),</math>

where <math>E(W)</math> is an empirical error term, <math>R(W)</math> is a matrix regularization penalty, and the minimization is over a space <math>\mathcal{H}</math> of matrices equipped with the Frobenius inner product.
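To make the matrix form of Tikhonov regularization and the Frobenius inner-product output model above concrete, the following minimal sketch (not taken from the source; it assumes NumPy, and the names <code>A</code>, <code>Y</code>, <code>W</code>, <code>lam</code>, and the helper functions are illustrative) solves the regularized least-squares problem in closed form and evaluates <math>\langle W, X \rangle</math> for a single input matrix.

<syntaxhighlight lang="python">
# Illustrative sketch: matrix Tikhonov regularization and the
# Frobenius inner-product output model. Variable names are assumptions.
import numpy as np

def matrix_tikhonov(A, Y, lam):
    """Solve min_W ||A W - Y||_F^2 + lam * ||W||_F^2 in closed form.

    The minimizer satisfies the normal equations (A^T A + lam I) W = A^T Y,
    i.e. ridge regression applied column-wise to Y.
    """
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

def predict(W, X):
    """Output model y = <W, X>_F, the Frobenius inner product of W and X."""
    return np.sum(W * X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))       # design matrix
    W_true = rng.standard_normal((10, 3))   # ground-truth coefficient matrix
    Y = A @ W_true + 0.1 * rng.standard_normal((50, 3))

    W_hat = matrix_tikhonov(A, Y, lam=1.0)
    print("recovery error:", np.linalg.norm(W_hat - W_true))

    # Frobenius inner-product prediction for a single input matrix X_i
    X_i = rng.standard_normal(W_hat.shape)
    print("y_i =", predict(W_hat, X_i))
</syntaxhighlight>

The closed-form step is specific to the squared Frobenius penalty; more general choices of <math>R(W)</math>, such as sparsity-inducing norms, would typically require iterative solvers rather than a single linear solve.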